High-Isolation Dual-Polarized Microstrip Antenna via Substrate Integrated Waveguide Technology
A dual-polarized microstrip antenna with high isolation is proposed using substrate-integrated waveguide (SIW) technology. Following the SIW approach, metalized holes (MHs) are inserted into the substrate so that the electric fields of the feeding parts are enclosed, which enhances the isolation of the antenna. The MHs on the four sides of the antenna also improve the bandwidth. A prototype of the proposed antenna has been fabricated and measured. Experimental results indicate that the antenna achieves isolation of more than 40 dB and impedance bandwidths of 21.9% and 23.8% (11.8-14.6 GHz and 11.65-14.8 GHz for the two ports) for reflection coefficients below -20 dB. The cross polarization in the main lobe remains below -30 dB, and the half-power beamwidth is about 70°. Meanwhile, the front-to-back ratio remains better than 20 dB. Good agreement between the measured and simulated results validates the proposed design.
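The percentage bandwidths above follow the usual fractional-bandwidth convention. As a quick check (a sketch, assuming the center frequency is the arithmetic mean of the band edges; the abstract's 21.9% figure for port 1 suggests the authors may use a slightly different reference frequency):

```python
# Fractional impedance bandwidth: (f_high - f_low) / f_center,
# with f_center taken here as the arithmetic mean of the band edges.
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    f_center = (f_low_ghz + f_high_ghz) / 2.0
    return (f_high_ghz - f_low_ghz) / f_center

# Band edges quoted in the abstract for the two ports.
port1 = fractional_bandwidth(11.8, 14.6)    # ~0.212 with this convention
port2 = fractional_bandwidth(11.65, 14.8)   # ~0.238, matching the quoted 23.8%
print(f"port 1: {port1:.1%}, port 2: {port2:.1%}")
```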
FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation
Facial expression analysis based on machine learning requires a large amount of well-annotated data to reflect the different changes in facial motion. Publicly available datasets help accelerate research in this area by providing benchmark resources, but all of these datasets, to the best of our knowledge, are limited to rough annotations of action units, recording only their absence, presence, or a five-level intensity according to the Facial Action Coding System (FACS). To meet the need for videos labeled in greater detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults, and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool developed by us to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors, and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point number between 0 and 1. To provide a baseline for future research, a benchmark for the regression of action unit values based on convolutional neural networks is presented. We also demonstrate the potential of our FEAFA dataset for 3D facial animation. Almost all state-of-the-art facial animation algorithms are based on 3D face reconstruction; we therefore propose a novel method that drives virtual characters based only on action unit values regressed from the 2D video frames of source actors.
Comment: 9 pages, 7 figures
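Since each FEAFA frame carries 23 continuous labels (9 + 10 symmetric/asymmetric action units plus 2 + 2 action descriptors, each in [0, 1]), evaluating a regressor reduces to comparing two 23-dimensional vectors. A minimal sketch of such an evaluation, assuming mean absolute error as the metric (a common choice for AU-intensity regression; the abstract does not state the paper's exact benchmark metric):

```python
# Each FEAFA frame is labeled with 23 continuous values in [0, 1]:
# 9 symmetric AUs + 10 asymmetric AUs + 2 symmetric ADs + 2 asymmetric ADs.
NUM_LABELS = 9 + 10 + 2 + 2  # = 23

def mean_absolute_error(pred, target):
    """MAE between a predicted and an annotated AU/AD intensity vector."""
    assert len(pred) == len(target) == NUM_LABELS
    return sum(abs(p - t) for p, t in zip(pred, target)) / NUM_LABELS

# Hypothetical usage: a perfect prediction scores 0, the worst scores 1.
perfect = mean_absolute_error([0.3] * NUM_LABELS, [0.3] * NUM_LABELS)  # 0.0
```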
Social contagions on interdependent lattice networks
Although an increasing amount of research is being done on dynamical processes on interdependent spatial networks, knowledge of how interdependent spatial networks influence the dynamics of social contagion in them is sparse. Here we present a novel non-Markovian social contagion model on interdependent spatial networks composed of two identical two-dimensional lattices. We compare the dynamics of social contagion on networks with different fractions of dependency links and find that the density of final recovered nodes increases as the number of dependency links is increased. We use a finite-size analysis method to identify the type of phase transition in the giant connected component (GCC) of the final adopted nodes and find that as we increase the fraction of dependency links, the phase transition switches from second-order to first-order. In strong interdependent spatial networks with abundant dependency links, increasing the fraction of initially adopted nodes can induce a switch from a first-order to a second-order phase transition in the social contagion dynamics. In networks with a small number of dependency links, the phase transition remains second-order. In addition, both the second-order and first-order phase transition points can be decreased by increasing the fraction of dependency links or the number of initially adopted nodes. This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61501358 and 61673085) and the Fundamental Research Funds for the Central Universities.
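The setup described above can be sketched in a toy simulation. This is a minimal illustration, not the paper's model: it assumes a cumulative-exposure (threshold) adoption rule as the non-Markovian ingredient, treats a dependency link as letting a node adopt once its counterpart in the other lattice has adopted, and omits the recovery stage; the abstract does not specify the actual transmission and recovery rules.

```python
import random

def simulate(L=20, frac_dependency=0.5, threshold=3, seed_frac=0.05, rng=None):
    """Toy non-Markovian social contagion on two coupled L x L lattices.

    A node adopts once the cumulative number of adopted lattice neighbours
    reaches `threshold` (adoption is absorbing, so the current count equals
    the accumulated exposure). A fraction of nodes has a dependency link to
    its counterpart in the other lattice: if the counterpart adopts, the
    threshold is waived. Returns the final density of adopted nodes.
    """
    rng = rng or random.Random(0)
    n = L * L

    def neighbours(i):
        x, y = divmod(i, L)  # periodic (torus) boundary conditions
        return [((x + dx) % L) * L + (y + dy) % L
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    dependent = [rng.random() < frac_dependency for _ in range(n)]
    adopted = [[False] * n, [False] * n]
    for layer in (0, 1):  # seed each lattice with initially adopted nodes
        for i in rng.sample(range(n), max(1, int(seed_frac * n))):
            adopted[layer][i] = True

    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for layer in (0, 1):
            for i in range(n):
                if adopted[layer][i]:
                    continue
                exposure = sum(adopted[layer][j] for j in neighbours(i))
                waived = dependent[i] and adopted[1 - layer][i]
                if exposure >= threshold or waived:
                    adopted[layer][i] = True
                    changed = True
    return sum(map(sum, adopted)) / (2 * n)
```

Sweeping `frac_dependency` and `seed_frac` in such a sketch is the analogue of the paper's comparison across fractions of dependency links and initially adopted nodes.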
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Transfer learning has fundamentally changed the landscape of natural language
processing (NLP) research. Many existing state-of-the-art models are first
pre-trained on a large text corpus and then fine-tuned on downstream tasks.
However, due to limited data resources from downstream tasks and the extremely
large capacity of pre-trained models, aggressive fine-tuning often causes the
adapted model to overfit the data of downstream tasks and forget the knowledge
of the pre-trained model. To address the above issue in a more principled
manner, we propose a new computational framework for robust and efficient
fine-tuning for pre-trained language models. Specifically, our proposed
framework contains two important ingredients: 1. Smoothness-inducing
regularization, which effectively manages the capacity of the model; 2. Bregman
proximal point optimization, which is a class of trust-region methods and can
prevent knowledge forgetting. Our experiments demonstrate that our proposed
method achieves state-of-the-art performance on multiple NLP benchmarks.
Comment: The 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
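The two ingredients can be illustrated on a toy model. This is a simplified sketch, not SMART itself: it uses a single random input perturbation where the paper uses an adversarially chosen one, the squared Euclidean distance as the simplest Bregman divergence, and anchors the proximal term at the pre-trained weights rather than updating the anchor each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(w, x):
    """Toy 'pre-trained' model: logistic-regression probabilities."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def smart_like_loss(w, w_pretrained, x, y, eps=0.01, lam=1.0, mu=1.0):
    """Task loss plus the two regularizers described in the abstract.

    1. Smoothness-inducing term: penalize the change in model output under
       a small input perturbation of norm eps (random here for simplicity).
    2. Proximal-point term: penalize divergence from the pre-trained
       weights, discouraging knowledge forgetting.
    """
    p = model(w, x)
    task = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    delta = rng.normal(size=x.shape)
    delta *= eps / np.linalg.norm(delta)          # scale to norm eps
    smooth = np.mean((model(w, x + delta) - p) ** 2)
    prox = np.sum((w - w_pretrained) ** 2)
    return task + lam * smooth + mu * prox
```

Minimizing this objective instead of the plain task loss is the sketch-level analogue of the framework's fine-tuning procedure: `lam` controls how smooth the model must be around each input, and `mu` controls how far the adapted weights may drift.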